We present FURTIT, a simple and efficient learning method for 3D shape segmentation networks. FURTIT is based on a self-supervised task of decomposing the surface of a 3D shape into geometric primitives. It can be readily applied to existing network architectures for 3D shape segmentation and improves performance in the few-shot setting, as we demonstrate on the widely used ShapeNet and PartNet benchmarks. FURTIT outperforms the existing state of the art in this setting, suggesting that decomposition into primitives is a useful prior for learning representations predictive of semantic parts. We present a number of experiments varying the choice of geometric primitives and downstream tasks to demonstrate the effectiveness of the method.
Deep neural networks (DNNs) are prone to miscalibrated predictions, often exhibiting a mismatch between the predicted output and the associated confidence scores. Contemporary model calibration techniques mitigate the problem of overconfident predictions by pushing down the confidence of the winning class while increasing the confidence of the remaining classes across all test samples. However, from a deployment perspective, an ideal model is desired to (i) generate well-calibrated predictions for high-confidence samples with predicted probability, say, >0.95, and (ii) generate a higher proportion of legitimate high-confidence samples. To this end, we propose a novel regularization technique that can be used with classification losses, leading to state-of-the-art calibrated predictions at test time. From a deployment standpoint in safety-critical applications, only high-confidence samples from a well-calibrated model are of interest, as the remaining samples have to undergo manual inspection. Predictive confidence reduction of these potentially "high-confidence samples" is a downside of existing calibration approaches. We mitigate this by proposing a dynamic train-time data pruning strategy that prunes low-confidence samples every few epochs, providing an increase in "confident yet calibrated samples". We demonstrate state-of-the-art calibration performance across image classification benchmarks, reducing training time without much compromise in accuracy. We provide insights into why our dynamic pruning strategy that prunes low-confidence training samples leads to an increase in high-confidence samples at test time.
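The dynamic train-time pruning idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual procedure: the function name, the rank-by-confidence rule, and the `keep_fraction` parameter are all assumptions for the sake of the example.

```python
import numpy as np

def prune_low_confidence(confidences, keep_fraction=0.9):
    """Keep the top `keep_fraction` of training samples ranked by the
    model's current confidence in the true class; drop the rest.
    In a training loop this would be invoked every few epochs."""
    n_keep = max(1, int(len(confidences) * keep_fraction))
    # Indices of the n_keep most confident samples, in ascending order.
    order = np.argsort(confidences)[::-1]
    return np.sort(order[:n_keep])

# Toy illustration: confidences for 10 training samples.
conf = np.array([0.99, 0.10, 0.85, 0.40, 0.95, 0.20, 0.88, 0.97, 0.55, 0.05])
kept = prune_low_confidence(conf, keep_fraction=0.6)
print(kept)  # indices of the 6 most confident samples: [0 2 4 6 7 8]
```

The surviving index set would then select the training subset for the next few epochs, which is where the reported training-time reduction comes from.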
This study concerns the formulation and application of Bayesian optimal experimental design to symbolic discovery, which is the inference from observational data of predictive models taking general functional forms. We apply constrained first-order methods to optimize an appropriate selection criterion, using Hamiltonian Monte Carlo to sample from the prior. A step for computing the predictive distribution, which involves a convolution, is carried out either via numerical integration or via fast transform methods.
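The two routes for the convolution step can be contrasted on a toy example. The densities below are arbitrary stand-ins, not the paper's model; the point is only that direct numerical integration and an FFT-based fast transform agree while scaling differently.

```python
import numpy as np

# Convolving two densities on a uniform grid: direct numerical
# integration vs. an FFT-based fast transform.
dx = 0.01
x = np.arange(-5, 5, dx)
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)        # N(0,1) density
g = np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)  # N(1,1) density

direct = np.convolve(f, g, mode="full") * dx      # O(n^2) quadrature
n = len(f) + len(g) - 1
fast = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n) * dx  # O(n log n)

# The two results agree up to floating-point error, and the convolved
# density peaks at the sum of the means (x = 1), as expected.
print(np.max(np.abs(direct - fast)))
```

Zero-padding both transforms to length `n = len(f) + len(g) - 1` makes the circular convolution computed by the FFT coincide with the linear convolution.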
Real-world tasks are largely composed of multiple models, each performing a sub-task in a larger chain of tasks, i.e., using the output from one model as input for another model in a multi-model pipeline. A model like MATra performs the task of crosslingual transliteration in two stages, using English as an intermediate transliteration target when transliterating between two Indic languages. We propose a novel distillation technique, EPIK, that condenses two-stage pipelines for hierarchical tasks into a single end-to-end model without compromising performance. This method can create end-to-end models for tasks without needing a dedicated end-to-end dataset, solving the data scarcity problem. The EPIK model has been distilled from the MATra model using this knowledge distillation technique. The MATra model can perform crosslingual transliteration between 5 languages: English, Hindi, Tamil, Kannada, and Bengali. The EPIK model executes the task of transliteration without any intermediate English output while retaining the performance and accuracy of the MATra model. The EPIK model can perform transliteration with an average CER score of 0.015 and an average phonetic accuracy of 92.1%. In addition, the average execution time has been reduced by 54.3% compared to the teacher model, and the student encoder has a similarity score of 97.5% with the teacher encoder. In a few cases, the EPIK model (student model) can outperform the MATra model (teacher model) even though it has been distilled from the MATra model.
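The core of the pipeline-condensing idea can be sketched as follows: run the two-stage teacher to produce (input, final-output) pairs, then fit a single student on those pairs directly, so no intermediate-stage dataset is needed. Everything below (the toy linear "stages" and the linear student) is illustrative, not EPIK's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1(x):  # e.g., source language -> intermediate (English) form
    return 2.0 * x + 1.0

def stage2(z):  # e.g., intermediate form -> target language
    return 0.5 * z - 3.0

# Distillation data: the composed teacher pipeline labels the inputs.
X = rng.standard_normal((200, 1))
Y = stage2(stage1(X))

# Student: a single end-to-end map fit on the distilled pairs.
A = np.hstack([X, np.ones_like(X)])
w, *_ = np.linalg.lstsq(A, Y, rcond=None)

def student(x):
    return np.hstack([x, np.ones_like(x)]) @ w

err = np.max(np.abs(student(X) - Y))
print(err)  # the student matches the composed two-stage pipeline
```

Because the student sees only input/final-output pairs, it skips the intermediate representation at inference time, which is the source of the reported speedup over the two-stage teacher.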
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
A machine learning model, under the influence of observed or unobserved confounders in the training data, can learn spurious correlations and fail to generalize when deployed. For image classifiers, augmenting a training dataset using counterfactual examples has been empirically shown to break spurious correlations. However, the counterfactual generation task itself becomes more difficult as the level of confounding increases. Existing methods for counterfactual generation under confounding consider a fixed set of interventions (e.g., texture, rotation) and are not flexible enough to capture diverse data-generating processes. Given a causal generative process, we formally characterize the adverse effects of confounding on any downstream tasks and show that the correlation between generative factors (attributes) can be used to quantitatively measure confounding between generative factors. To minimize such correlation, we propose a counterfactual generation method that learns to modify the value of any attribute in an image and generate new images given a set of observed attributes, even when the dataset is highly confounded. These counterfactual images are then used to regularize the downstream classifier such that the learned representations are the same across various generative factors conditioned on the class label. Our method is computationally efficient, simple to implement, and works well for any number of generative factors and confounding variables. Our experimental results on both synthetic (MNIST variants) and real-world (CelebA) datasets show the usefulness of our approach.
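The paper's quantitative handle on confounding is the correlation between generative factors. A toy sketch of that measurement, with made-up binary attributes and Bernoulli mixing (the names and setup are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def attribute_correlation(a, b):
    """Pearson correlation between two generative factors (attributes);
    values near 0 indicate little confounding between them."""
    return np.corrcoef(a, b)[0, 1]

# Confounded dataset: attribute b almost always co-occurs with a.
a = rng.integers(0, 2, 1000)
b_confounded = np.where(rng.random(1000) < 0.95, a, 1 - a)
# De-confounded dataset: b drawn independently of a.
b_independent = rng.integers(0, 2, 1000)

print(attribute_correlation(a, b_confounded))   # close to 1
print(attribute_correlation(a, b_independent))  # close to 0
```

Counterfactual augmentation aims to move the dataset from the first regime toward the second, so that the downstream classifier cannot exploit the attribute correlation.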
Analogical reasoning problems challenge both connectionist and symbolic AI systems, as these require a combination of background knowledge, reasoning, and pattern recognition. Symbolic systems ingest explicit domain knowledge and perform deductive reasoning, but they are sensitive to noise and require inputs to be preprocessed into symbolic features. Connectionist systems, on the other hand, can directly ingest rich input spaces such as images, text, or speech, and can recognize patterns even from noisy inputs. However, connectionist models struggle to use explicit domain knowledge for deductive reasoning. In this paper, we propose a framework that combines the pattern-recognition abilities of neural networks with symbolic reasoning and background knowledge to solve a class of analogical reasoning problems in which the set of attributes and possible relations is known. We take inspiration from the "neural algorithmic reasoning" approach [DeepMind 2020] and (i) learn distributed representations grounded in a symbolic model of the problem domain, (ii) train neural network transformations of those distributed representations that reflect the relations involved in the problem, and finally (iii) train a neural network encoder from images to the distributed representations in (i). These three elements enable us to perform search-based reasoning using neural networks as elementary functions that manipulate distributed representations. We test this on visual analogy problems from Raven's Progressive Matrices and achieve accuracy competitive with human performance, in some cases outperforming initial end-to-end neural network approaches. While recent neural models trained at scale yield SOTA results, our novel neurosymbolic reasoning approach is a promising direction for this problem, and is arguably more general, especially for problems where domain knowledge is available.
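The search-based reasoning step can be sketched at a very small scale: treat each object as a distributed representation (a vector) and each relation as a transformation of that vector space, then search for the relation that best explains an observed pair. The fixed linear maps below stand in for the trained neural transformations; the whole setup is illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
objects = {name: rng.standard_normal(dim) for name in "ABCD"}

# Candidate relations as transformations of the representation space.
relations = {
    "rel_0": rng.standard_normal((dim, dim)),
    "rel_1": rng.standard_normal((dim, dim)),
}

def infer_relation(src, dst):
    """Search over relations: pick the one whose transform of `src`
    lands closest to `dst`."""
    return min(relations,
               key=lambda r: np.linalg.norm(relations[r] @ src - dst))

# Construct a pair that truly follows rel_1, then recover it by search.
objects["B"] = relations["rel_1"] @ objects["A"]
print(infer_relation(objects["A"], objects["B"]))  # rel_1
```

In the full framework the encoder in step (iii) would supply these vectors from images, and the recovered relation would then be applied to complete the analogy.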
Image classification models often learn to predict a class based on irrelevant co-occurrences between input features and output classes in the training data. We call such unwanted correlations "data biases," and the visual features causing data biases "bias factors." Identifying and mitigating biases automatically without human intervention is challenging. Therefore, we conducted a design study to find a human-in-the-loop solution. First, with three experts, we identified user tasks that capture the bias mitigation process for image classification models. Then, to support these tasks, we developed a visual analytics system called DASH that allows users to visually identify bias factors, iteratively generate synthetic images using a state-of-the-art image-to-image translation model, and supervise the model training process to improve classification accuracy. Our quantitative evaluation and a qualitative study with ten participants demonstrate the usefulness of DASH and provide lessons for future work.
Sensors are crucial for autonomous operation in robotic vehicles (RVs). Physical attacks on sensors, such as sensor tampering or spoofing, can feed erroneous values to an RV through the physical channel, resulting in mission failures. In this paper, we present DeLorean, a comprehensive diagnosis and recovery framework for securing autonomous RVs from physical attacks. We consider a strong form of physical attack called sensor deception attacks (SDAs), in which the adversary targets multiple sensors of different types simultaneously (even including all sensors). Under SDAs, DeLorean inspects the attack-induced errors, identifies the targeted sensors, and prevents the erroneous sensor inputs from being used in the RV's feedback control loop. DeLorean then replays historic state information in the feedback control loop and recovers the RV from the attack. Our evaluation on four real and two simulated RVs shows that DeLorean can recover RVs from different attacks and ensure mission success in 94% of the cases (on average) without any crashes. DeLorean incurs low performance, memory, and battery overheads.
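The replay-based recovery idea can be sketched with a rolling buffer of trusted states: while no attack is flagged, states are recorded; when one is flagged, the control loop is fed a recent historic state instead of the (possibly spoofed) live reading. This is a minimal sketch under those assumptions, not DeLorean's actual mechanism.

```python
from collections import deque

class ReplayRecovery:
    """Keep a window of recent trusted states; on attack detection,
    replay historic state into the control loop instead of live
    sensor values. Illustrative only."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def step(self, sensor_state, attack_detected):
        if attack_detected and self.history:
            # Spoofed reading is ignored and never enters the buffer.
            return self.history[-1]
        self.history.append(sensor_state)
        return sensor_state

rr = ReplayRecovery(window=3)
print(rr.step(1.0, False))   # 1.0  (trusted, recorded)
print(rr.step(2.0, False))   # 2.0  (trusted, recorded)
print(rr.step(99.0, True))   # 2.0  (spoofed value dropped, history replayed)
```

A real controller would replay a sequence of historic states rather than a single value, but the buffering-and-substitution structure is the same.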
We present a novel recurrent graph network (RGN) approach for predicting discrete marked event sequences by learning the underlying complex stochastic process. Using the framework of point processes, we interpret a marked discrete event sequence as the superposition of different sequences, each of a unique type. The nodes of the graph network use LSTMs to incorporate past information, while a graph attention network (GAT) introduces a strong inductive bias to capture the interactions between these different types of events. By changing the self-attention mechanism from attending over past events to attending over event types, we reduce the time and space complexity from $\mathcal{O}(N^2)$ ($N$ being the total number of events) to $\mathcal{O}(|\mathcal{Y}|^2)$ ($|\mathcal{Y}|$ being the number of event types). Experiments show that the proposed approach improves performance on log-likelihood, prediction, and goodness-of-fit tasks, with lower time and space complexity compared to state-of-the-art transformer architectures.
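The complexity reduction can be illustrated by pooling past events into one state per event type and attending over those states, so the attention score matrix is $|\mathcal{Y}| \times |\mathcal{Y}|$ rather than $N \times N$. The mean-pooling rule and dimensions below are illustrative assumptions, not the paper's architecture (which uses LSTM node states).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def type_level_attention(event_types, event_embs, n_types):
    """Instead of scoring all N past events against each other
    (O(N^2)), pool each type's events into one state and attend
    over the |Y| type states (O(|Y|^2))."""
    d = event_embs.shape[1]
    states = np.zeros((n_types, d))
    for y in range(n_types):
        mask = event_types == y
        if mask.any():
            states[y] = event_embs[mask].mean(axis=0)  # one state per type
    scores = states @ states.T / np.sqrt(d)  # |Y| x |Y|, not N x N
    return softmax(scores) @ states

rng = np.random.default_rng(0)
types = rng.integers(0, 3, size=500)        # N = 500 events, |Y| = 3 types
embs = rng.standard_normal((500, 8))
out = type_level_attention(types, embs, n_types=3)
print(out.shape)  # (3, 8): one updated state per event type
```

The attention cost now depends only on the number of event types, which is what makes long sequences with few mark types cheap to model.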